
    Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications

    The Message Passing Interface (MPI) standard continues to dominate the landscape of parallel computing as the de facto API for writing large-scale scientific applications. Critics, however, argue that it is a low-level API that is harder to use than shared memory approaches. This paper addresses the issue of programming productivity by proposing a high-level, easy-to-use, and efficient programming API that hides and segregates complex low-level message passing code from the application-specific code. Our proposed API is inspired by communication patterns found in Gadget-2, an MPI-based parallel production code for cosmological N-body and hydrodynamic simulations. In this paper we analyze Gadget-2 with a view to understanding what high-level Single Program Multiple Data (SPMD) communication abstractions might be developed to replace the intricate use of MPI in such an irregular application, without compromising efficiency. Our analysis revealed that the use of low-level MPI primitives, bundled with the computation code, makes Gadget-2 difficult to understand and probably hard to maintain. In addition, we found that the original Gadget-2 code contains a small handful of complex and recurring patterns of message passing. We also noted that these complex patterns can be reorganized into a higher-level communication library with some modifications to the Gadget-2 code. We present the implementation and evaluation of one such message passing pattern (or schedule) that we term Collective Asynchronous Remote Invocation (CARI). As the name suggests, CARI is a collective variant of Remote Method Invocation (RMI), an attractive, high-level, and established paradigm in distributed systems programming. The CARI API might be implemented in several ways; we develop and evaluate two versions of this API on a compute cluster. The performance evaluation reveals that CARI versions of the Gadget-2 code perform as well as the original Gadget-2 code while raising the level of abstraction considerably.
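    The abstract does not show what the CARI API looks like. The sketch below is a purely illustrative guess at the shape of a collective asynchronous invocation schedule, written in plain Java with an executor standing in for remote ranks; all names (`CariSchedule`, `invoke`, `complete`) and the local simulation are assumptions, not the paper's actual API.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of a CARI-style API: a rank posts asynchronous
// invocations against remote ranks, then completes them collectively.
// Remote execution is simulated with a local thread pool.
public class CariSketch {
    // A registered remote method: takes a request payload, returns a reply.
    interface RemoteMethod { int handle(int request); }

    static class CariSchedule {
        private final RemoteMethod method;
        private final List<Future<Integer>> pending = new ArrayList<>();
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        CariSchedule(RemoteMethod method) { this.method = method; }

        // Asynchronously invoke the method "on rank dest" (simulated locally).
        Future<Integer> invoke(int dest, int request) {
            Future<Integer> f = pool.submit(() -> method.handle(request));
            pending.add(f);
            return f;
        }

        // Collective completion: wait for all outstanding invocations.
        List<Integer> complete() throws Exception {
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : pending) results.add(f.get());
            pending.clear();
            pool.shutdown();
            return results;
        }
    }

    public static void main(String[] args) throws Exception {
        // Remote method: squares the request payload.
        CariSchedule schedule = new CariSchedule(x -> x * x);
        for (int dest = 0; dest < 4; dest++) schedule.invoke(dest, dest + 1);
        System.out.println(schedule.complete()); // [1, 4, 9, 16]
    }
}
```

    The appeal of the pattern is visible even in this toy form: the caller never touches message buffers or tags, only a method handle and a completion point.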

    Teaching Parallel Programming Using Java

    This paper presents an overview of the "Applied Parallel Computing" course taught to final year Software Engineering undergraduate students in Spring 2014 at NUST, Pakistan. The main objective of the course was to introduce practical parallel programming tools and techniques for shared and distributed memory concurrent systems. A unique aspect of the course was that Java was used as the principal programming language. The course was divided into three sections. The first section covered parallel programming techniques for shared memory systems, including multicore and Symmetric Multi-Processor (SMP) systems. In this section, Java threads were taught as a viable programming API for such systems. The second section was dedicated to parallel programming tools for distributed memory systems, including clusters and networks of computers. We used MPJ Express, a Java MPI library, for conducting programming assignments and lab work for this section. The third and final section covered advanced topics, including the MapReduce programming model using Hadoop and General Purpose Computing on Graphics Processing Units (GPGPU).
    Comment: 8 pages, 6 figures, MPJ Express, MPI Java, Teaching Parallel Programming
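    The first section's shared-memory material can be illustrated with the kind of minimal plain-Java-threads exercise such a course typically uses: partitioning an array sum across worker threads. This is an illustrative sketch, not material taken from the course itself.

```java
// Minimal shared-memory example in the style of the course's first section:
// summing an array in parallel with plain Java threads.
public class ParallelSum {
    static long sum(int[] data, int nThreads) throws InterruptedException {
        long[] partial = new long[nThreads];
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                // Each thread sums a contiguous block of the array.
                int chunk = (data.length + nThreads - 1) / nThreads;
                int lo = id * chunk, hi = Math.min(data.length, lo + chunk);
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                partial[id] = s;  // no sharing: one result slot per thread
            });
            workers[t].start();
        }
        long total = 0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join();       // wait for the worker, then combine
            total += partial[t];
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data, 4)); // 500500
    }
}
```

    Giving each thread its own result slot and combining only after `join()` sidesteps the data races that a shared accumulator would invite, which is exactly the kind of lesson a shared-memory unit is built around.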

    MPJ Express meets YARN: towards Java HPC on Hadoop systems

    Many organizations, including academic, research, and commercial institutions, have invested heavily in setting up High Performance Computing (HPC) facilities for running computational science applications. On the other hand, the Apache Hadoop software, since emerging in 2005, has become a popular, reliable, and scalable open-source framework for processing large-scale data (Big Data). Realizing the importance and significance of Big Data, an increasing number of organizations are investing in relatively cheaper Hadoop clusters for executing their mission-critical data processing applications. An issue here is that system administrators at these sites might have to maintain two parallel facilities for running HPC and Hadoop computations. This, of course, is not ideal due to redundant maintenance work and poor economics. This paper attempts to bridge this gap by allowing HPC and Hadoop jobs to co-exist on a single hardware facility. We achieve this goal by exploiting YARN (Hadoop v2.0), which decouples the computational and resource-scheduling part of the Hadoop framework from HDFS. In this context, we have developed a YARN-based reference runtime system for the MPJ Express software that allows executing parallel MPI-like Java applications on Hadoop clusters. The main contribution of this paper is to provide the Big Data community access to MPI-like programming using MPJ Express. As an aside, this work allows parallel Java applications to perform computations on data stored in the Hadoop Distributed File System (HDFS).

    DIAMOnDS - DIstributed Agents for MObile & Dynamic Services

    A distributed services architecture with support for mobile agents between services offers significantly improved communication and computational flexibility. The use of agents allows complex operations involving large amounts of data to be executed effectively using distributed resources. The prototype system Distributed Agents for Mobile and Dynamic Services (DIAMOnDS) allows a service to send agents, on its behalf, to other services to perform data manipulation and processing. Agents have been implemented as mobile services that are discovered using the Jini Lookup mechanism and used by other services for task management and communication. Agents provide proxies for interaction with other services as well as a specific GUI to monitor and control agent activity. Thus, agents acting on behalf of one service cooperate with other services to carry out a job, providing inter-operation of loosely coupled services in a semi-autonomous way. Remote file system access functionality has been incorporated into the agent framework, allowing services to dynamically share and browse the file system resources of the hosts running the services. Generic database access functionality has been implemented in the mobile agent framework, allowing complex data mining and processing operations to be performed efficiently in a distributed system. A basic data searching agent is also implemented that performs a query-based search in a file system. The testing of the framework was carried out on a WAN by moving Connectivity Test agents between AgentStations at CERN, Switzerland and NUST, Pakistan.
    Comment: 7 pages, 4 figures, CHEP03, La Jolla, California, March 24-28, 2003
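    The core idea, an agent carrying its own behavior to a remote station and running against that station's local resources, can be sketched in a few lines of plain Java. The `Agent`/`Station` names are hypothetical and this toy does not use the actual Jini-based DIAMOnDS framework; it only illustrates the pattern the abstract describes.

```java
import java.util.*;

// Toy sketch of the mobile-agent pattern (names hypothetical, not the
// DIAMOnDS API): a station accepts an agent and the agent executes
// against the station's local resources, returning its result.
public class AgentSketch {
    // An agent carries its own behavior to a remote station.
    interface Agent<T> { T runAt(Station station); }

    // A station exposes local resources (here, a toy file listing).
    static class Station {
        final String name;
        final List<String> files;
        Station(String name, List<String> files) {
            this.name = name;
            this.files = files;
        }
        // Accepting an agent executes it against this station's resources.
        <T> T accept(Agent<T> agent) { return agent.runAt(this); }
    }

    public static void main(String[] args) {
        Station remote = new Station("cern",
                Arrays.asList("run01.dat", "run02.dat", "notes.txt"));
        // A searching agent, analogous to the paper's query-based file search:
        // the query travels to the data instead of the data to the query.
        Agent<List<String>> search = st -> {
            List<String> hits = new ArrayList<>();
            for (String f : st.files) if (f.endsWith(".dat")) hits.add(f);
            return hits;
        };
        System.out.println(remote.accept(search)); // [run01.dat, run02.dat]
    }
}
```

    Moving the query to the data rather than shipping the data to the caller is what makes the approach attractive for the large-data operations the abstract mentions.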

    Fat embolism syndrome: a case series and review of literature

    Fat embolism and fat embolism syndrome (FES) form a clinical spectrum characterized by dissemination of fat emboli into the systemic circulation, usually as a result of orthopedic trauma and related surgical procedures. We present a case series of three patients who had FES of variable presentation and severity. Our first patient initially developed FES preoperatively, which was complicated by acute pulmonary thromboembolism in the postoperative period. Our third patient developed FES after intramedullary nail fixation of a femoral shaft fracture. Fat embolism is a relatively rare but fatal complication in orthopedic trauma and during long bone fracture manipulations. In addition, fat embolism is a risk factor for pulmonary thromboembolism, as was evident in our first case, so patients with fat embolism should be closely monitored for the latter. The Gurd and Wilson criteria are the most commonly used for the diagnosis of FES. Treatment is largely supportive, and preventive measures include early fixation of long bone fractures. In a meta-analysis, prophylactic use of steroids was found to prevent the occurrence of FES in nearly two-thirds of patients. There is no proven role for hypertonic dextrose infusion, heparin, or corticosteroids in the treatment of FES, and they are therefore not routinely recommended, although in fulminant FES steroids should be considered.

    A parallel implementation of the Finite-Difference Time-Domain algorithm using MPJ Express

    This paper presents and evaluates a parallel Java implementation of the Finite-Difference Time-Domain (FDTD) method, which is a widely used numerical technique in computational electrodynamics. The Java version is parallelized using MPJ Express, a thread-safe messaging library. MPJ Express provides a full implementation of the mpiJava 1.2 API specification, which defines an MPI-like binding for the Java language. This paper describes our experiences of implementing the Java version of the FDTD method. Towards the end of this paper, we evaluate and compare the performance of the Java version against its C counterpart on a 32-processing-core Linux cluster of eight compute nodes.
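    To make the numerical kernel concrete, the following is a minimal serial 1-D FDTD update loop in Java, using normalized units and the "magic" time step dt = dx/c, with a Gaussian hard source at the grid centre. It is an illustrative sketch of the stencil such a code parallelizes, not the paper's implementation.

```java
// Minimal 1-D FDTD sketch (normalized units, Courant number 1): interleaved
// updates of the electric field ez and magnetic field hy, with a Gaussian
// pulse hard-sourced at the grid centre. Illustrative only.
public class Fdtd1D {
    static double[] run(int cells, int steps) {
        double[] ez = new double[cells];   // electric field samples
        double[] hy = new double[cells];   // magnetic field samples
        for (int t = 0; t < steps; t++) {
            // Update H from the spatial difference (curl) of E.
            for (int i = 0; i < cells - 1; i++) hy[i] += ez[i + 1] - ez[i];
            // Update E from the spatial difference (curl) of H;
            // ez[0] is left fixed as a simple boundary.
            for (int i = 1; i < cells; i++) ez[i] += hy[i] - hy[i - 1];
            // Hard source: Gaussian pulse injected at the grid centre.
            double arg = (t - 30.0) / 10.0;
            ez[cells / 2] = Math.exp(-arg * arg);
        }
        return ez;
    }

    public static void main(String[] args) {
        double[] ez = run(200, 100);
        // After 100 steps the pulse has travelled well away from the source.
        System.out.printf("probe ez[160] = %.4f%n", ez[160]);
    }
}
```

    A distributed-memory parallelization of this loop splits the grid into contiguous blocks per process and exchanges one-cell halo values of `ez` and `hy` at block boundaries each time step, which is where an MPI-like library such as MPJ Express comes in.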

    Investigation on the stability and efficiency of MAPbI3 and MASnI3 thin films for Solar Cells

    Hybrid organic-inorganic halides are considered outstanding materials for the absorber layer in perovskite solar cells (PSCs) because of their efficiency, ease of fabrication, and low-cost materials. However, the lead (Pb) content of the material may have dramatic effects on human health owing to its toxicity. Here, we investigate replacing the lead in MAPbI3 with tin (Sn) to show its influence on film nucleation and growth and on the stability of a solar device based on MASnI3. By analysing the manufactured perovskite films by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), UV-visible absorption, photoluminescence (PL), and atomic force microscopy (AFM), the properties of the thin films when lead is replaced by tin are reported. Simulations are reported for MAPbI3, where Voc = 0.856 V, Jsc = 25.65 mA cm⁻², FF = 86.09%, and η = 18.91%, and for MASnI3, where Voc = 0.887 V, Jsc = 14.02 mA cm⁻², FF = 83.72%, and η = 10.42%. Perovskite devices using MASnI3 as the absorber were found to be more stable despite their lower efficiency, which could be improved by enhancing the bandgap alignment of MASnI3. The results of this paper also support the development of a new, reliable production system for PSCs.
    This research was funded by grant PID2019-107137RB-C21, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe."
    Marí-Guaita, J.; Bouich, A.; Shafi, MA.; Bouich, A.; Marí, B. (2022). Investigation on the stability and efficiency of MAPbI3 and MASnI3 thin films for Solar Cells. Physica Status Solidi (a), 219(5), 1-7. https://doi.org/10.1002/pssa.202100664
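    The reported efficiencies can be cross-checked from the standard relation η = Voc · Jsc · FF / Pin. The snippet below does the arithmetic, assuming standard AM1.5G illumination (Pin = 100 mW/cm²), which the abstract does not state explicitly.

```java
// Cross-check of the reported cell efficiencies from
// eta = Voc * Jsc * FF / Pin, with Pin = 100 mW/cm^2 assumed (AM1.5G).
public class EtaCheck {
    static double eta(double vocVolts, double jscMilliAmpPerCm2, double ffPercent) {
        double pin = 100.0; // mW/cm^2, assumed standard illumination
        // Voc [V] * Jsc [mA/cm^2] gives mW/cm^2; scale by FF, divide by Pin.
        return vocVolts * jscMilliAmpPerCm2 * (ffPercent / 100.0) / pin * 100.0;
    }

    public static void main(String[] args) {
        System.out.printf("MAPbI3: %.2f %%%n", eta(0.856, 25.65, 86.09)); // 18.90
        System.out.printf("MASnI3: %.2f %%%n", eta(0.887, 14.02, 83.72)); // 10.41
    }
}
```

    Both values land within about 0.01 percentage points of the reported 18.91% and 10.42%, the residual being rounding of the quoted Voc, Jsc, and FF.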

    Device level communication libraries for high‐performance computing in Java

    This is the peer reviewed version of the following article: Taboada, G. L., Touriño, J., Doallo, R., Shafi, A., Baker, M. and Carpenter, B. (2011), Device level communication libraries for high-performance computing in Java. Concurrency Computat.: Pract. Exper., 23: 2382-2403. doi:10.1002/cpe.1777, which has been published in final form at https://doi.org/10.1002/cpe.1777. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
    [Abstract] Since its release, the Java programming language has attracted considerable attention from the high-performance computing (HPC) community because of its portability, high programming productivity, and built-in multithreading and networking support. As a consequence, several initiatives have been taken to develop a high-performance Java message-passing library to program distributed memory architectures, such as clusters. The performance of Java message-passing applications relies heavily on the communications performance. Thus, the design and implementation of low-level communication devices that support message-passing libraries is an important research issue in Java for HPC. MPJ Express is our Java message-passing implementation for developing high-performance parallel Java applications. Its public release currently contains three communication devices: the first is built using the Java New Input/Output (NIO) package for TCP/IP; the second is specifically designed for the Myrinet Express library on Myrinet; and the third supports thread-based shared memory communications. Although these devices have been successfully deployed in many production environments, previous performance evaluations of MPJ Express suggest that the buffering layer, tightly coupled with these devices, incurs a certain degree of copying overhead, which represents one of the main performance penalties. This paper presents a more efficient Java message-passing communications device, based on Java Input/Output sockets, that avoids this buffering overhead. Moreover, this device implements several strategies, both in the communication protocol and in the HPC hardware support, which optimize Java message-passing communications. In order to evaluate its benefits, this paper analyzes the performance of this device comparatively with other Java and native message-passing libraries on various high-speed networks, such as Gigabit Ethernet, Scalable Coherent Interface, Myrinet, and InfiniBand, as well as in a shared memory multicore scenario. The reported communication overhead reduction encourages the upcoming incorporation of this device in MPJ Express.
    Ministerio de Ciencia e Innovación; TIN2010-16735
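    The buffering overhead at issue can be illustrated with a toy contrast between staging a `double[]` message in an intermediate byte buffer before writing it and streaming the values directly into the output. This sketch uses in-memory streams rather than real sockets and is not MPJ Express internals; it only shows the extra copy that a buffering layer introduces.

```java
import java.io.*;
import java.nio.ByteBuffer;

// Toy illustration of buffering overhead: the buffered path copies the
// message into a staging ByteBuffer before writing, while the direct path
// writes each element straight into the stream.
public class BufferingSketch {
    // Buffered path: pack into a ByteBuffer first (extra copy), then write.
    static byte[] sendBuffered(double[] msg) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(msg.length * Double.BYTES);
        for (double d : msg) buf.putDouble(d);        // copy #1: array -> buffer
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(buf.array());                       // copy #2: buffer -> stream
        return out.toByteArray();
    }

    // Direct path: write each element straight to the stream, no staging buffer.
    static byte[] sendDirect(double[] msg) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DataOutputStream data = new DataOutputStream(out);
        for (double d : msg) data.writeDouble(d);     // single copy into stream
        data.flush();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        double[] msg = {1.0, 2.0, 3.0};
        // Both paths produce identical big-endian wire bytes; only the
        // number of intermediate copies differs.
        System.out.println(java.util.Arrays.equals(sendBuffered(msg), sendDirect(msg)));
    }
}
```

    Since both paths yield the same bytes on the wire, eliminating the staging copy is a pure win for large messages, which is the motivation behind the device the paper describes.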